7 research outputs found

    Loss-resilient Coding of Texture and Depth for Free-viewpoint Video Conferencing

    Full text link
    Free-viewpoint video conferencing allows a participant to observe the remote 3D scene from any freely chosen viewpoint. An intermediate virtual viewpoint image is commonly synthesized via depth-image-based rendering (DIBR), using two pairs of transmitted texture and depth maps from two neighboring captured viewpoints. To maintain high quality in the synthesized images, it is imperative to contain the adverse effects of network packet losses that may arise during texture and depth video transmission. Towards this end, we develop an integrated approach that exploits the representation redundancy inherent in the multiple streamed videos: a voxel in the 3D scene visible to two captured views is sampled and coded twice, once in each view. In particular, at the receiver we first develop an error concealment strategy that adaptively blends corresponding pixels in the two captured views during DIBR, so that pixels from the more reliably transmitted view are weighted more heavily. We then couple it with a sender-side optimization of reference picture selection (RPS) during real-time video coding, so that blocks containing samples of voxels visible in both views are more error-resiliently coded in one view only, given that adaptive blending will erase errors in the other view. Further, the sensitivity of synthesized view distortion to texture versus depth errors is analyzed, so that the relative importance of texture and depth code blocks can be computed for system-wide RPS optimization. Experimental results show that the proposed scheme can outperform the use of a traditional feedback channel by up to 0.82 dB on average at an 8% packet loss rate, and by as much as 3 dB for particular frames.
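
    As a rough illustration of the receiver-side adaptive blending described above, the sketch below (Python/NumPy) blends two DIBR-warped texture views with per-pixel weights driven by reliability maps; the `blend_views` helper, the reliability model, and the normalization are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def blend_views(warped_left, warped_right, rel_left, rel_right):
    """Blend two DIBR-warped texture views into one virtual view.

    Pixels from the view judged more reliable (e.g., fewer packet losses
    affecting the blocks they were decoded from) get a larger weight.
    rel_left / rel_right are per-pixel reliability maps in [0, 1]; this
    reliability model is an assumption for illustration only.
    """
    w_left = rel_left / np.maximum(rel_left + rel_right, 1e-8)
    return w_left * warped_left + (1.0 - w_left) * warped_right

# Toy usage: two 4x4 grayscale views, the right one half as reliable.
left = np.full((4, 4), 100.0)
right = np.full((4, 4), 140.0)
virtual = blend_views(left, right,
                      rel_left=np.ones((4, 4)),
                      rel_right=np.full((4, 4), 0.5))
print(virtual[0, 0])  # ~113.3, pulled towards the more reliable left view
```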

    Attention-Weighted Texture and Depth Bit-Allocation in General-Geometry Free-Viewpoint Television

    No full text

    A Motion-Based Binary Partition Tree Approach to Video Object Segmentation

    No full text
    This paper describes an approach for generating Binary Partition Trees …

    Convexity characterization of virtual view reconstruction error in multi-view imaging

    No full text
    Virtual view synthesis is a key component of multi-view imaging systems that enable visually immersive environments for emerging applications, e.g., virtual reality and 360-degree video. Using a small collection of captured reference viewpoints, this technique reconstructs any view of a remote scene of interest that a user navigates to, enhancing the perceived immersion. We carry out a convexity characterization analysis of the virtual view reconstruction error caused by compression of the captured multi-view content. This error is expressed as a function of the virtual viewpoint coordinate relative to the captured reference viewpoints. We derive fundamental insights about the nature of this dependency and formulate a prediction framework that is able to accurately predict the specific dependency shape, convex or concave, for given reference views, multi-view content, and compression settings. We are able to integrate our analysis into a proof-of-concept coding framework and demonstrate considerable benefits over a baseline approach.
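
    The convex-versus-concave classification can be pictured with a small numerical sketch that inspects the sign of the discrete second differences of a sampled error curve; the synthetic curve and the `curvature_sign` helper below are invented for illustration and are not the paper's prediction framework.

```python
import numpy as np

def curvature_sign(errors):
    """Classify a sampled error-vs-viewpoint curve as convex or concave.

    Assumes uniformly spaced viewpoint coordinates and uses the sign of
    the discrete second differences; mixed signs mean neither.
    """
    second_diff = np.diff(errors, n=2)
    if np.all(second_diff >= 0):
        return "convex"
    if np.all(second_diff <= 0):
        return "concave"
    return "mixed"

# Illustrative only: a synthetic distortion curve between two reference
# views at coordinates 0 and 1 (not data or a model from the paper).
coords = np.linspace(0.0, 1.0, 11)
errors = 0.5 - 4.0 * (coords - 0.5) ** 2   # peaks midway -> concave
print(curvature_sign(errors))               # prints "concave"
```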

    Local texture and geometry descriptors for fast block-based motion estimation of dynamic voxelized point clouds

    No full text
    Motion estimation in dynamic point cloud analysis or compression is a computationally intensive procedure, generally involving a large search space and often complex voxel matching functions. We present an extension and improvement on prior work to speed up block-based motion estimation between temporally adjacent point clouds. We introduce local, or block-based, texture descriptors as a complement to voxel geometry description. Descriptors are organized in an occupancy map which may be efficiently computed and stored. By consulting the map, a point cloud motion estimator may significantly reduce its search space while maintaining prediction distortion at similar quality levels. The proposed texture-based occupancy maps provide a significant speedup with respect to prior work, averaging 26.9% on the tested data set.
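
    A minimal sketch of the search-pruning idea, assuming a toy mean/standard-deviation texture descriptor and a dictionary-based occupancy map keyed by block index; the descriptor, the distance threshold, and the helper names are illustrative assumptions rather than the paper's actual design.

```python
import numpy as np

def block_descriptor(voxel_values):
    """Toy local texture descriptor for one block: mean and standard
    deviation of the voxel attribute values (illustrative assumption)."""
    return np.array([voxel_values.mean(), voxel_values.std()])

def build_occupancy_map(blocks):
    """Occupancy map: block index -> texture descriptor of occupied blocks."""
    return {idx: block_descriptor(vals) for idx, vals in blocks.items()
            if len(vals) > 0}

def candidate_blocks(query_desc, occupancy_map, max_desc_dist=10.0):
    """Prune the motion-estimation search space: keep only occupied blocks
    whose descriptor lies close to the query block's descriptor."""
    return [idx for idx, desc in occupancy_map.items()
            if np.linalg.norm(desc - query_desc) <= max_desc_dist]

# Toy usage: blocks of the previous frame indexed by 3D grid coordinates,
# each holding the luma values of its voxels.
prev_frame = {(0, 0, 0): np.array([10.0, 12.0, 11.0]),
              (1, 0, 0): np.array([200.0, 210.0, 205.0]),
              (2, 0, 0): np.array([11.0, 9.0, 10.0])}
occupancy_map = build_occupancy_map(prev_frame)
query_desc = block_descriptor(np.array([10.0, 11.0, 12.0]))
print(candidate_blocks(query_desc, occupancy_map))  # [(0, 0, 0), (2, 0, 0)]
```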